Archive pour la catégorie 'tips & tricks'

Pinning Objects in Oracle

Jeudi 6 décembre 2007

What: Pinning an object to the shared pool is the process of telling Oracle to keep that object in the shared pool at all times, until the database is shut down.

Why: Pinning objects prevents Oracle from aging them in and out of the shared pool, which can degrade performance, especially when the objects in question are core libraries: large packages that are constantly called by your applications (for example, a web application).

When: Pinning should happen when the database is started. From your custom startup.sh script you can call a pin_objs.sh script, which in turn runs a SQL file that determines which objects to pin and then pins them to the shared pool.
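That startup hook might look like the following sketch. The file locations, the SID value, and the PL/SQL loop are assumptions for illustration (the post's own cursor-based version appears further below):

```shell
#!/bin/sh
# pin_objs.sh -- hypothetical startup hook that pins every object recorded
# in OBJECTS_TO_PIN. The SID and file path below are assumptions.
ORACLE_SID=fahd_db; export ORACLE_SID
SQLFILE=/tmp/pin_objs.sql

# Generate the SQL that pins each object listed in the table.
cat > "$SQLFILE" <<'EOF'
begin
  for r in (select owner, object_name, object_type_flag
            from objects_to_pin) loop
    dbms_shared_pool.keep(r.owner || '.' || r.object_name,
                          r.object_type_flag);
  end loop;
end;
/
exit
EOF

# Run it only where sqlplus exists (i.e. on the database server).
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s "/ as sysdba" @"$SQLFILE"
fi
```

Calling this from startup.sh keeps the pin list in one table instead of hard-coding object names in the script.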

How: Let's get to the core of this.

insert into OBJECTS_TO_PIN (object_name, object_type_flag,
                            owner, date_entered)
SELECT name,
       decode(type, 'PACKAGE', 'P', 'PACKAGE BODY', 'P',
                    'TRIGGER', 'R', 'PROCEDURE', 'P'),
       owner, sysdate
FROM v$db_object_cache
WHERE type in ('TRIGGER', 'PROCEDURE', 'PACKAGE BODY')
  AND executions > 0
  AND kept = 'NO'
  AND (loads > 1 OR sharable_mem >= 40000)
ORDER BY sharable_mem desc, loads, executions desc;

commit;

——————————————————————————————————

****** Once all these objects are identified and inserted into your OBJECTS_TO_PIN table ******

——————————————————————————————————
declare
  own   varchar2(100);
  nam   varchar2(100);
  flagx varchar2(1);

  cursor objs is
    select owner, object_name, object_type_flag
    from objects_to_pin;

begin
  open objs;
  loop
    fetch objs into own, nam, flagx;
    exit when objs%notfound;
    dbms_shared_pool.keep(own || '.' || nam, flagx);
  end loop;
  close objs;
end;
/

Use pinning; it can have a great impact on the performance of your DBs.

I hope you enjoyed reading :)

Tip of the day !

Jeudi 22 mars 2007

Let me ask you a legitimate question:
How does your organization control code versioning ?

IT departments often struggle with this, unless of course your organization writes software, in which case versioning software is a must and the IT department will most likely tag along and use the same depot. Among the bunch on the market, the most famous are Perforce, CVS, and Subversion.

So most likely, the latest version in your release branch is what exists on your production system. Suppose, however, that you have some doubt there could be discrepancies between your version control software and what is on production. How do you go about getting that code from production?

I received this request this morning and found myself thinking: "If I honor this request, how many more like it will come up?" The example below shows how you can get source code from a production environment dumped to your dev/test environment:

rem **********************************************************************
rem $Id:  $
rem Title:  Get Package Spec & Body
rem Description:  Get Package SQL from Prod to Dev or Test.
rem History:
rem 20-Mar-2007  Fahd Ayoubi         Created.
rem
rem **********************************************************************

set verify off
set heading off
set serveroutput off
set pause off
set feedback off
set time off

define package=&pkg_name
spool get_pkgtext.log

select text
from dba_source
where owner = 'APPS' AND name = '&package' AND type = 'PACKAGE'
UNION ALL
select text
from dba_source
where owner = 'APPS' AND name = '&package' AND type = 'PACKAGE BODY';

spool off

The script above prompts you for a package name. Once it is entered, all the source code for that package is dumped into the spool file.

You can wrap the script above in a shell script that lets you ftp, rename, and execute the generated file, essentially copying 100% of your production code into whatever environment you choose.
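As a sketch, such a wrapper might look like this. The helper name, the demo spool contents, and the transfer step are assumptions (the post used ftp; scp is shown in a comment):

```shell
#!/bin/sh
# Hypothetical deploy helper; names and paths here are assumptions.
# deploy_pkg renames the spool output to <PKG>.sql, ready to ship and run.
deploy_pkg() {
  PKG=$1
  PRODLOG=get_pkgtext.log
  # Run the spool script against production when sqlplus is available.
  if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -s "/ as sysdba" @get_pkgtext.sql "$PKG"
  fi
  # Rename the spool output to a runnable .sql file.
  [ -f "$PRODLOG" ] && mv "$PRODLOG" "$PKG.sql"
  # Ship it to the target environment, e.g.:
  #   scp "$PKG.sql" devbox:/tmp/      (the post used ftp)
  return 0
}

# Demo: simulate the spool file the SQL script would produce, then deploy.
echo "create or replace package DEMO_PKG as end DEMO_PKG;" > get_pkgtext.log
deploy_pkg DEMO_PKG
```

Running the generated DEMO_PKG.sql on the target instance then recreates the package exactly as it exists on production.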

-Thanks for reading

Gather Table stats - dbms_stats

Mercredi 28 février 2007

Oracle Applications 11i utilizes the Cost-Based optimizer, so it is important that the database has current statistics for all tables and indexes. Oracle provides a set of utilities to be used to gather statistics on a regular basis. These tools are:

Concurrent Programs: “Gather Schema Stats” and “Gather Table Stats”
Package: FND_STATS

The concurrent programs call the FND_STATS package to gather statistics. The FND_STATS package is a "wrapper" for DBMS_STATS that uses extra information about the Oracle Applications tables to calculate histograms for certain tables and to exclude certain interface tables from normal statistics gathering.

The old-fashioned "analyze table" and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance, because the cost-based optimizer (CBO) relies on the quality of the statistics to choose the best execution plan for every SQL statement. The dbms_stats utility does a far better job of estimating statistics, especially for large partitioned tables, and better statistics result in faster SQL execution plans.

Let's see how dbms_stats works. It's easy! Here is a sample execution of dbms_stats with the options clause:

exec dbms_stats.gather_schema_stats( -
  ownname          => 'FAHD', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt       => 'for all columns size repeat', -
  degree           => 34 -
)

I found that the best practice is to bundle your SQL calls in a shell script called from cron (a scheduled job at the Unix level). Here is an example:

15 6 * * 0 ORACLE_SID=fahd_db; . $HOME/cron.env > /dev/null; /mount01/app/oracle/admin/table_stats.sh ALL_TABLES 15 50000000 > /u01/app/oracle/admin/stats_zwebt.out

15 6 * * 0 -----> the job runs weekly, on Sundays at 6:15 AM.

table_stats.sh -----> the shell script that will be executed.

ALL_TABLES 15 50000000 -----> three parameters: analyze all tables, at a 15% sample, for tables with fewer than 50 million rows.

stats_zwebt.out -----> the file where all output is dumped; a log file for the record trail.
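The table_stats.sh script itself is not shown in the post. Here is a minimal sketch of what such a script might look like; the dbms_stats loop, the cutoff logic, and the file paths are assumptions, not the original script:

```shell
#!/bin/sh
# table_stats.sh -- hypothetical sketch of the script the cron entry calls.
# Arguments: $1 = scope (e.g. ALL_TABLES), $2 = sample pct, $3 = row cutoff.
SCOPE=${1:-ALL_TABLES}
PCT=${2:-15}
MAXROWS=${3:-50000000}
SQLFILE=/tmp/table_stats.sql

# Build a PL/SQL block that gathers stats for tables under the row cutoff.
cat > "$SQLFILE" <<EOF
begin
  for t in (select owner, table_name
            from dba_tables
            where num_rows < $MAXROWS
              and owner not in ('SYS','SYSTEM')) loop
    dbms_stats.gather_table_stats(
      ownname          => t.owner,
      tabname          => t.table_name,
      estimate_percent => $PCT,
      cascade          => true);
  end loop;
end;
/
exit
EOF

echo "gathering stats: scope=$SCOPE pct=$PCT cutoff=$MAXROWS rows"
# Run only on a box that actually has sqlplus.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s "/ as sysdba" @"$SQLFILE"
fi
```

Everything the script prints goes to the .out file named in the cron entry, giving you the record trail mentioned above.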

Thanks for reading.

Concurrent Manager - FND Trick

Jeudi 15 février 2007

Assume the following;

You refresh your development instance from your production environment. Guess what: if some concurrent requests were running during the hot backup, they will be flagged as running after the refresh (snapshot). So you want to go into the application layer and kill these requests. The screenshot below shows how you achieve that (screenshot: Concurrent Manager).

However, the step above will not flag the request as killed. Without getting into too much detail, Oracle's way of dealing with this phantom phenomenon is exactly that. So the question is: how do you make sure it is dead? You can do that using the following statement:

update APPLSYS.FND_CONCURRENT_REQUESTS
set PHASE_CODE = 'C', STATUS_CODE = 'C'
where request_id = 278373874;

The update above sets PHASE_CODE and STATUS_CODE, which pretty much define what you see on the apps tier.

APPLSYS.FND_CONCURRENT_REQUESTS

Remember that FND tables are proprietary to Oracle and no DDL should be applied to them. They are defined under the APPLSYS schema. You need sysdba access to run the update, and access to the "System Administrator" responsibility on the apps tier.

11i in debug mode

Jeudi 8 février 2007

Ever seen weird behavior with Oracle Apps 11i (11.5.10)? Like that's supposed to be news. Many times you will see some of the weirdest errors, and it does not really matter which module it is: AR, HR, BOM (Bill of Materials), Manufacturing, etc.

Each enterprise has its own little ways of dealing with apps issues. Besides the common "Oracle best practice" dos and don'ts, many DBAs have scripts they run to narrow the problem down to something much easier to debug.

In my experience, visibility into what the application is doing against the DB is the most valuable troubleshooting asset. The best way to achieve that level of visibility is to turn your log files into more than just a few added kilobytes. Here is how to do it:

  • Log into the apps tier server as the apps tier owner.
  • Edit $APACHE_TOP/Apache/conf/httpd.conf

LogLevel debug

  • Edit $APACHE_TOP/Jserv/etc/jserv.conf

ApJServLogLevel debug

  • Edit $APACHE_TOP/Jserv/etc/jserv.properties

log=true
wrapper.bin.parameters=-DAFLOG_ENABLED=TRUE
wrapper.bin.parameters=-DAFLOG_LEVEL=UNEXPECTED
wrapper.bin.parameters=-DAFLOG_MODULE=%
wrapper.bin.parameters=-DAFLOG_FILENAME=$APACHE_TOP/Apache/logs/aferror.log

  • Edit $APACHE_TOP/Jserv/etc/ssp_init.txt

DebugLevel=5
DebugSwitch=ON

  • Bounce the Apache Services for changes to take effect
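The edits above can be scripted. The sketch below flips the two Apache-side log levels with sed; the demo directory and the default (non-debug) levels it seeds are assumptions so the script can be tried anywhere, not your real configuration:

```shell
#!/bin/sh
# debug_on.sh -- hypothetical helper that flips the Apache/Jserv log
# levels listed above. $APACHE_TOP should point at the apps-tier Apache
# home; a demo directory is used when it is unset.
APACHE_TOP=${APACHE_TOP:-/tmp/apache_demo}
HTTPD=$APACHE_TOP/Apache/conf/httpd.conf
JSERV=$APACHE_TOP/Jserv/etc/jserv.conf

# Demo scaffolding: create the two files with non-debug defaults if absent.
mkdir -p "$APACHE_TOP/Apache/conf" "$APACHE_TOP/Jserv/etc"
[ -f "$HTTPD" ] || echo "LogLevel warn" > "$HTTPD"
[ -f "$JSERV" ] || echo "ApJServLogLevel notice" > "$JSERV"

# Flip both levels to debug, keeping .bak backups of the originals.
sed -i.bak 's/^LogLevel .*/LogLevel debug/' "$HTTPD"
sed -i.bak 's/^ApJServLogLevel .*/ApJServLogLevel debug/' "$JSERV"

echo "log levels set to debug; bounce Apache for the change to take effect"
```

The jserv.properties and ssp_init.txt edits would be handled the same way; keeping the .bak files makes it easy to revert once the debugging session is over.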

Thanks for reading :) .

Table Backup - Import / export

Mercredi 31 janvier 2007

Background:
Assume you are taking a new piece of code out to a production environment, and you will be deleting large chunks of data from your tables.
Needless to say, you will need to back up your data somewhere where you can easily (without a high level of effort, downtime, or the help of a DBA) roll back your changes.
How do you achieve this ?

Analysis:
Three ways come to mind when faced with an issue such as this:
a. Do nothing, and recover the whole database if needed (from the previous night's backup) - you had better have one!
b. Create the same table with table_name_bkp and do the following:
Insert into table_name_bkp (select * from table_name);
c. Import / Export with a pipe :) .

A and B are not feasible solutions for the following respective reasons:
1. Doing a complete DB refresh for a single table will wipe out everyone's work.
2. Cluttering your tablespace with an extra table you 'may' drop later is wasteful.

Solution:
How to pipe:
1. Make a pipe using the mknod command
2. Start the export using an export parameter file. Export to the pipe rather than a normal dump file.
3. At the same time start the import using an import parameter file. Import from the pipe rather than the normal dump file.
4. Clean up and remove the pipe
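The pipe mechanics in steps 1-4 can be tried without an Oracle client. In this sketch, gzip and gunzip stand in for exp and imp, which is an assumption purely for illustration; the real scripts follow below:

```shell
#!/bin/sh
# Demonstrates the named-pipe mechanics of steps 1-4.
# gzip/gunzip stand in for exp/imp so the sketch runs anywhere.
PIPE=/tmp/exp_pipe
rm -f "$PIPE" /tmp/pipe_out.txt
mknod "$PIPE" p              # step 1: make the pipe (mkfifo also works)

# step 2: the "export" writes into the pipe in the background
# (with Oracle: exp PARFILE=export.dat FILE=$PIPE &)
echo "table data" | gzip > "$PIPE" &

# step 3: the "import" reads from the pipe at the same time
# (with Oracle: imp PARFILE=import.dat FILE=$PIPE)
gunzip < "$PIPE" > /tmp/pipe_out.txt

wait                         # let the background writer finish
rm -f "$PIPE"                # step 4: clean up and remove the pipe
cat /tmp/pipe_out.txt
```

The point of the pipe is that the dump never lands on disk: the import consumes the bytes as the export produces them, so you need no space for a dump file.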

We used the following shell scripts:
1. Export:

#!/bin/sh
passwd=$1
NLS_LANG=AMERICAN_AMERICA.UTF8;
export NLS_LANG;
#echo ==========================================
#echo Running script to Export your tables
#echo ——————————————
cat $oracle_home/connections/user.dat | awk -F= '{print $2}' | exp PARFILE=/somedir/export.dat;

Your parameter file (export.dat) will look like this (note that it is an exp parameter file, not a shell script):

FILE=niwc.dmp
TABLES=(TABLE_LIST) #CSV
DIRECT=Y
LOG=export.log
COMPRESS=N
CONSISTENT=N
CONSTRAINTS=Y
FULL=N
GRANTS=Y
HELP=N
INDEXES=Y
RECORD=Y
ROWS=Y
STATISTICS=NONE

2. Import:

#!/bin/sh
passwd=$1
NLS_LANG=AMERICAN_AMERICA.UTF8;
export NLS_LANG;
#echo ==========================================
#echo Running script to Import your tables
#echo ——————————————
cat $oracle_home/connections/user.dat | awk -F= '{print $2}' | imp PARFILE=/somedir/import.dat;
-Thanks for reading :)

Open Cursors - KO issue

Dimanche 21 janvier 2007

The Problem

We are going into the 36th hour of this go-live, and I can say it has been the smoothest execution by DBAs and programmers together. Then you might tell yourself: go get some sleep! Up until the 37th hour, exactly 1 hour away from signing off, every single application had been signed off and everything worked great. But then we got the call that the Configurator application was running into issues and errors.
During a request run through Oracle Apps we noticed that the Concurrent Manager hung and the request would fail. Among the long error messages/codes, the following was embedded:

FDPSTP failed due to ORA-03113: end-of-file on communication channel. Then it goes on to say AFPGRG failed due to ORA-01000: maximum open cursors exceeded.

If you know the Oracle architecture, you will recognize right away that the open_cursors parameter is set in the init.ora file. In our case it was 750.

The Resolution:
You may think of running the following:

select count(*) from v$open_cursor; -- This query comes back with not only the open cursors but also the cached and closed ones, so do not trust its results.

So finally I executed this statement on the DB:
alter system set open_cursors=1250 scope=memory;
(
show parameters open
NAME TYPE VALUE
———————————— ———– ——————————
open_cursors integer 1250
open_links integer 20
open_links_per_instance integer 4
read_only_open_delayed boolean FALSE
session_max_open_files integer 10
)

This updates the value dynamically, but to make the change permanent, init.ora needs to be updated and the database has to be bounced.
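Before raising the limit, it helps to see how close each session actually is to it. The helper below is a sketch, not part of the original post; it relies on the standard 'opened cursors current' statistic in v$sesstat and assumes a sysdba login is possible on the box:

```shell
#!/bin/sh
# open_cursors_check.sh -- hypothetical helper: report, per session, how
# many cursors are truly open (v$sesstat), to compare against open_cursors.
SQLFILE=/tmp/open_cursors_check.sql
cat > "$SQLFILE" <<'EOF'
select s.sid, s.username, st.value as open_cursors
from v$sesstat st, v$statname sn, v$session s
where sn.name = 'opened cursors current'
  and st.statistic# = sn.statistic#
  and st.sid = s.sid
  and st.value > 0
order by st.value desc;

show parameter open_cursors
exit
EOF

# Run only where a sysdba login is possible.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s "/ as sysdba" @"$SQLFILE"
fi
```

Unlike v$open_cursor, this statistic counts genuinely open cursors, so the top sessions it reports are the ones worth reviewing for cursor leaks.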

Lessons Learned:

1) When using Oracle v$ views, you need to understand the content of the view; the name is not always reflective of the content.
2) When you do code reviews, make sure cursors are opened and closed appropriately.
3) When changing a DB parameter such as open_cursors, make sure you monitor the DB closely for any unwanted effects.

@ the 38th hour (now!) we all headed home and went to bed!

Thanks for reading. :) .

Don't just create indexes! Monitor them.

Vendredi 19 janvier 2007

As DBAs we will most of the time turn into a pain in the *** asking developers and programmer analysts to create indexes on their tables. Most organizations have code reviews, where a code install is reviewed prior to rolling it into a production environment. Unfortunately, we all know how code reviews turn into a "we'll get it done later" affair, much like how documentation is usually ignored.

I was faced with the following question:
Which Indexes are being used by the database ?

Oracle takes a performance hit if your database carries indexes that are never used. Why?
Because every insert, update, and delete on a table must also maintain each of its indexes, and the optimizer still considers those indexes while it builds the execution plan for your query. None of that work is needed if the index is never used!

To identify all unused indexes in the database, you can do the
following:

(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM
(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM
(c) Connect as a user with ALTER ANY INDEX system privilege and run
the start monitoring script
(d) Perform normal activities in your database
(e) After a period of time that is representative of your workload,
run the stop monitoring script
(f) Query v$object_usage to see what indexes have not been used

Detailed steps:

(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool startmonitor.sql
select 'alter index ' || owner || '.' || index_name || ' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off

(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool stopmonitor.sql
select 'alter index ' || owner || '.' || index_name || ' nomonitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off

(c) Connect as a user with ALTER ANY INDEX system privilege and run
the newly created script (from step a) to start monitoring.

@startmonitor

(d) Perform normal activities in your database

(e) After a period of time that is representative of your workload,
connect as a user with ALTER ANY INDEX system privilege and run
the script to stop monitoring(from step b).

@stopmonitor

(f) Query v$object_usage in join with dba_indexes, to see what indexes
have not been used

select d.owner, v.index_name
from dba_indexes d, v$object_usage v
where v.used = 'NO' and d.index_name = v.index_name;
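The detailed steps can be bundled into one driver script. The sketch below covers steps (a) through (c); the generator file names and the /tmp staging location are assumptions:

```shell
#!/bin/sh
# monitor_indexes.sh -- hypothetical driver for steps (a)-(c): generate
# startmonitor.sql / stopmonitor.sql, then kick off monitoring.
gen() {  # $1 = monitoring|nomonitoring, $2 = spool file to generate
  cat > "/tmp/gen_$2" <<EOF
set heading off echo off feedback off pages 10000
spool $2
select 'alter index ' || owner || '.' || index_name || ' $1 usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off
exit
EOF
}

gen monitoring   startmonitor.sql
gen nomonitoring stopmonitor.sql

# With sqlplus present: run the generator, then start monitoring (step c).
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s "/ as sysdba" @/tmp/gen_startmonitor.sql
  sqlplus -s "/ as sysdba" @startmonitor.sql
fi
```

After your representative workload period, the same pattern runs stopmonitor.sql (step e), and the v$object_usage query above gives you the unused-index list (step f).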

Thanks for reading again ;)

Find DB Object

Mardi 9 janvier 2007

You have a large database; you have done everything you can with tablespaces and datafiles to separate objects, etc., but you just can't seem to find what you are looking for.

The following script helps you find the object you are looking for :

[ fayoubi on thatbox:] sqlplus “/ as sysdba” @/home/fayoubi/dbascripts/logon.sql

SQL*Plus: Release 10.2.0.2.0 - Production on Tue Jan 9 15:57:47 2007

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

15:57:47 SQL> @$home/dba_scripts/findit
SPOOLING
Enter value for obj_name: MAROCIT
old   6: where object = upper('&obj_name') and a.sid = s.sid
new   6: where object = upper('MAROCIT') and a.sid = s.sid

no rows selected

Elapsed: 00:00:00.13
DONE SPOOLING
15:58:12 SQL>
select
distinct s.sid, s.serial#, to_char(logon_time, 'DD-MON-RR HH24:MI'), object
from v$session s, v$access a
where object LIKE upper('%&obj_name%') and a.sid = s.sid;

Remember that the use of v$ views is restricted to the sysdba user. So do not attempt to run this script if you cannot log in as sysdba.

-Thanks for reading again ;)